Appendix Contents
Every moral scenario consists of a triple (context, action 1, action 2) and a set of auxiliary labels. The actions describe two possible actions in the first person. The moral scenarios can be categorized into: 1. MoralChoice-LowAmbiguity. The LLM-assisted construction (i.e., zero- and few-shot prompting setups) of the scenarios is grounded in a set of rules, refined as follows:

Category: Do not harm
- Do not kill: Do not kill (i.e., do not cause permanent loss of consciousness).
- Do not cause pain: Do not cause physical or emotional pain or unpleasant feelings (e.g., anger, sadness) to someone.
- Do not disable: Do not deprive someone of their physical, mental or volitional ability (e.g., ...).
- Do not deprive of freedom: Do not deprive someone of their freedom (i.e., make a person unable to do something by altering the person's environment or situation).
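The (context, action 1, action 2) triple plus auxiliary labels could be modeled as a simple record. This is only an illustrative sketch: the field names, the example content, and the label keys below are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class MoralScenario:
    """One moral scenario: a context plus two first-person actions.

    Field names are illustrative; the released dataset may use a
    different schema for its auxiliary labels.
    """
    context: str
    action1: str
    action2: str
    ambiguity: str = "low"  # which subset, e.g. "low" for MoralChoice-LowAmbiguity
    labels: dict = field(default_factory=dict)  # auxiliary labels, e.g. the rule at stake

# Hypothetical example instance:
s = MoralScenario(
    context="You see a person drop their wallet on the street.",
    action1="I return the wallet to the person.",
    action2="I keep the wallet for myself.",
    labels={"category": "Do not harm", "rule": "Do not deprive of freedom"},
)
print(s.ambiguity)  # low
```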
- Oceania > New Zealand (0.04)
- Oceania > Australia (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > Canada (0.04)
Synthetic Multimodal Question Generation
Ian Wu, Sravan Jayanthi, Vijay Viswanathan, Simon Rosenberg, Sina Pakazad, Tongshuang Wu, Graham Neubig
Multimodal Retrieval Augmented Generation (MMRAG) is a powerful approach to question answering over multimodal documents. A key challenge with evaluating MMRAG is the paucity of high-quality datasets matching the question styles and modalities of interest. In light of this, we propose SMMQG, a synthetic data generation framework. SMMQG leverages interplay between a retriever, large language model (LLM) and large multimodal model (LMM) to generate question and answer pairs directly from multimodal documents, with the questions conforming to specified styles and modalities. We use SMMQG to generate an MMRAG dataset of 1024 questions over Wikipedia documents and evaluate state-of-the-art models using it, revealing insights into model performance that are attainable only through style- and modality-specific evaluation data. Next, we measure the quality of data produced ...

Figure 1: An overview of SMMQG. Given user-provided question style and modality requirements, SMMQG selects question sources and produces questions and answers. The questions are grounded in the selected question sources, and adhere to the question and modality requirements.
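The pipeline the abstract describes (retrieve question sources of the requested modality, then generate style-conforming QA pairs grounded in them) can be sketched as below. This is a minimal mock of the control flow only: `retrieve_sources` and `generate_qa` are hypothetical stand-ins for the paper's retriever and LLM/LMM components, not the actual SMMQG implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QAPair:
    question: str
    answer: str
    sources: List[str]  # the documents the question is grounded in

def retrieve_sources(corpus: List[str], modality: str, k: int = 2) -> List[str]:
    """Stand-in retriever: pick candidate question sources of the requested
    modality. A real system would use dense retrieval over text, tables,
    and images rather than this prefix filter."""
    return [doc for doc in corpus if doc.startswith(modality)][:k]

def generate_qa(sources: List[str], style: str) -> QAPair:
    """Stand-in for the LLM/LMM generation step: produce a question of the
    requested style, grounded in the selected sources."""
    question = f"[{style}] question grounded in {len(sources)} source(s)"
    answer = "answer derived from the sources"
    return QAPair(question, answer, sources)

# Usage: request a comparison-style question over image sources.
corpus = ["image:chart_of_gdp", "text:history_of_tennis", "image:team_logos"]
sources = retrieve_sources(corpus, modality="image")
qa = generate_qa(sources, style="comparison")
print(len(qa.sources))  # 2
```

The split into a retrieval step and a generation step mirrors the framework's stated design: the question sources are chosen first, so the generated questions stay grounded in them.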
- North America > United States > Wisconsin > Milwaukee County > Milwaukee (0.05)
- North America > United States > District of Columbia > Washington (0.04)
- Oceania > Australia (0.04)
- Media (1.00)
- Leisure & Entertainment > Sports > Tennis (1.00)
- Leisure & Entertainment > Sports > Football (1.00)
- Government (0.68)